A comparison of JULES-ES-1p0 wave01 members against the original ensemble (wave00).
Wave01 input parameter sets were chosen using history matching, so that they fall within Andy Wiltshire's basic constraints on NBP, NPP, cSoil, and cVeg stocks at the end of the 20th century. We use 300 of the 500 members, holding back two-fifths for later emulator validation.
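History matching rules out input settings whose emulated output is implausibly far from the target. A minimal sketch of the standard implausibility measure (the function and the example numbers here are illustrative, not the values used in this analysis):

```r
# Implausibility: distance from the observation in units of combined
# uncertainty (emulator variance + observation variance + discrepancy).
# Inputs are conventionally ruled out when I > 3 (the 3-sigma rule).
implausibility <- function(em_mean, em_var, obs, obs_var, disc_var = 0) {
  abs(em_mean - obs) / sqrt(em_var + obs_var + disc_var)
}

# Toy example: emulated mean 55 (variance 4) against an observation of 50
# (variance 1) gives I = 5 / sqrt(5), i.e. "not ruled out yet"
implausibility(em_mean = 55, em_var = 4, obs = 50, obs_var = 1)
```

Wave01 candidates are drawn from the input space that survives this screening.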
We answer some basic questions:

- What proportion of the new ensemble matches AW's constraints?
- How good is a GP emulator? Does it improve overall with the new ensemble members added? In particular, does it improve for those members within the AW constraints?
- Does the sensitivity analysis change?
Load libraries, functions and data.
There are no NAs, but some relative humidity values are infinite. There are no “low NPP” ensemble members.
## [1] 117464.6
## [1] FALSE
## row col
## [1,] 140 9
## [2,] 232 9
## [3,] 249 9
## [4,] 300 9
## [1] Inf Inf Inf Inf
## [1] "rh_lnd_sum"
Global means over the 20 years at the end of the 20th century. There is still a significant low bias in the cVeg output.
### What proportion of models now fall within Andy’s constraints?
A third! Better than before, but still not great, and pointing to a significant model discrepancy in cVeg.
Of the 300 members of the wave01 ensemble, 100 pass Andy Wiltshire’s Level 2 constraints.
## [1] 100
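The constraint step amounts to keeping the rows of the output matrix that fall inside the acceptable ranges for all four variables. A minimal sketch (the threshold values and matrix below are illustrative stand-ins, not the actual AW Level 2 ranges):

```r
# Sketch of the Level-2-style constraint: keep members whose summary outputs
# all fall inside the acceptable ranges. Threshold values are illustrative.
constrain_l2 <- function(Y) {
  Y[, "nbp_lnd_sum"] > 0 &
    Y[, "npp_nlim_lnd_sum"] > 35 & Y[, "npp_nlim_lnd_sum"] < 80 &
    Y[, "cSoil_lnd_sum"] > 750 & Y[, "cSoil_lnd_sum"] < 3000 &
    Y[, "cVeg_lnd_sum"] > 300 & Y[, "cVeg_lnd_sum"] < 800
}

# Toy two-member output matrix: the second member fails the NPP range
Y_toy <- cbind(nbp_lnd_sum      = c(0.5, -0.1),
               npp_nlim_lnd_sum = c(50, 90),
               cSoil_lnd_sum    = c(1500, 1500),
               cVeg_lnd_sum     = c(450, 450))
level2_ix <- which(constrain_l2(Y_toy))
sum(constrain_l2(Y_toy))  # number of members passing
```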
Pairs plot of the inputs that pass the constraints with respect to the limits of the original ensemble.
This plot overlays timeseries of wave00, wave01, and the level-2-constrained wave01 ensembles. We see that wave01 is closer to the standard run than wave00, and the level-2-constrained wave01 ensemble is often closer still. However, there are still quite large discrepancies. For example, baresoilfrac is often much too high and shrubfrac often too low (though both span the standard run). Treefrac is away from zero, but still often too low or too high. While fHarvest looks good, fLuc does not appear to be constrained by the process at all. RH (soil respiration) looks well constrained, whereas lai is often too low.
One thing we could do next is constrain input space again, using observations or “tolerance to error” on some or all of these outputs.
We could also extend sensitivity analysis to work out what controls e.g. treefrac.
We hope that running the new ensemble gives us a better emulator, and allows us to rule out more input space. We particularly hope that the emulator is better for those members that are inside AW’s constraints.
First, we can look at the emulator errors in two cases: the level1a data (a basic carbon cycle), and then the wave01 data, which should have similar characteristics. (We should have eliminated really bad simulations, but wave01 is not constrained to fall perfectly within AW's constraints.)
## nbp_lnd_sum npp_nlim_lnd_sum cSoil_lnd_sum cVeg_lnd_sum
## 583 213 317 312
## nbp_lnd_sum npp_nlim_lnd_sum cSoil_lnd_sum cVeg_lnd_sum
## 314 243 243 243
Found the outlier: it looks like it's member 440.
## integer(0)
The top row shows the leave-one-out prediction accuracy of the original wave00 ensemble, and the lower row the entire wave00 AND wave01 ensemble combined.
We see that the error stats for some of the outputs from wave01 are worse, but many more ensemble members lie within the constraints for wave01.
“pmae” is “proportional mean absolute error”: the mean absolute error expressed as a percentage of the original (minimally constrained) ensemble range in that output.
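The definition above can be sketched directly (function and toy numbers are illustrative, not the analysis code):

```r
# pmae: mean absolute error as a percentage of the (minimally constrained)
# ensemble range of that output.
pmae <- function(y, y_pred, y_ens) {
  mae <- mean(abs(y_pred - y))
  (mae / diff(range(y_ens))) * 100
}

# Toy example: MAE of 0.15 over an ensemble range of 10 gives 1.5%
y      <- c(1, 2, 3, 4)
y_pred <- c(1.1, 1.9, 3.2, 3.8)
pmae(y, y_pred, y_ens = c(0, 10))
```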
This gives us an idea of how good the emulator is where it really matters, and, because the members are consistent, a fairer idea of whether the emulators have improved with more members.
The good news is that the emulators are more accurate for wave01.
These leave-one-out prediction accuracy plots rank the ensemble members from largest underprediction to largest overprediction using the wave00 predictions. A perfect prediction would appear on the horizontal “zero” line.
Many of the wave01 predictions are closer to the horizontal line, and therefore more accurate predictions.
None of the predictions fall outside the uncertainty bounds, which suggests the bounds are overly conservative (they should be smaller).
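The leave-one-out procedure itself is straightforward: refit without each member in turn and predict it back. The analysis uses km() Gaussian process emulators from DiceKriging; in this self-contained sketch a linear model stands in for the emulator, and the data are synthetic:

```r
# Leave-one-out prediction errors: drop each member, refit, predict it back.
# A linear model stands in here for the km() GP emulator used in the analysis.
loo_errors <- function(X, y) {
  dat <- data.frame(X, y = y)
  sapply(seq_along(y), function(i) {
    fit <- lm(y ~ ., data = dat[-i, ])
    unname(predict(fit, newdata = dat[i, , drop = FALSE])) - y[i]
  })
}

set.seed(42)
X <- data.frame(p1 = runif(20), p2 = runif(20))   # toy input parameters
y <- 2 * X$p1 - X$p2 + rnorm(20, sd = 0.05)        # toy output
err <- loo_errors(X, y)
rank_ix <- order(err)  # the plots rank members from under- to overprediction
```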
Looking at the proportional mean absolute error (pmae), expressed in percent, we can see that it doesn’t improve much for the whole ensemble, but does improve significantly for the subset of ensemble members that fall within AW’s constraints from the first ensemble (marked "_sub").
```r
pmae_wave00     <- lapply(loostats_km_Y_level1a, FUN = function(x) x$pmae)
pmae_wave01     <- lapply(loostats_km_Y_level1a_wave01, FUN = function(x) x$pmae)
pmae_wave00_sub <- lapply(loostats_km_Y_level1a_sub, FUN = function(x) x$pmae)
pmae_wave01_sub <- lapply(loostats_km_Y_level1a_wave01_sub, FUN = function(x) x$pmae)

pmae_table <- cbind(pmae_wave00, pmae_wave01, pmae_wave00_sub, pmae_wave01_sub)
print(pmae_table)
```
## pmae_wave00 pmae_wave01 pmae_wave00_sub pmae_wave01_sub
## [1,] 4.980068 4.927639 7.243424 4.913122
## [2,] 4.282053 4.00755 4.804418 4.085174
## [3,] 3.597279 3.790101 4.555768 3.83427
## [4,] 4.241398 4.516014 4.814413 3.226234
Calculate the atmospheric growth rate over 1984-2013 using a simple linear fit.
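The "simple linear fit" step amounts to regressing the annual series on year and taking the slope as the growth rate. A sketch with synthetic CO2 values (for illustration only, not observations):

```r
# Growth rate as the slope of a straight-line fit over 1984-2013.
# The co2 values here are synthetic, for illustration only.
set.seed(123)
years <- 1984:2013
co2   <- 344 + 1.8 * (years - 1984) + rnorm(length(years), sd = 0.3)
fit   <- lm(co2 ~ years)
agr   <- unname(coef(fit)["years"])  # estimated growth rate, ppm per year
```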
## [1] "correlation agr vs cnbp (all members)"
## [2] "-0.0407011946805412"
## [1] "correlation agr vs cnbp (wave01)" "0.00334447281747124"
##
## Call:
## lm(formula = agr_modern_ens ~ cnbp_modern_ens)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.047413 -0.003802 0.003838 0.006715 0.023090
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.114e-01 5.176e-04 215.173 <2e-16 ***
## cnbp_modern_ens -1.379e-09 1.519e-09 -0.908 0.364
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.01155 on 497 degrees of freedom
## Multiple R-squared: 0.001657, Adjusted R-squared: -0.0003522
## F-statistic: 0.8247 on 1 and 497 DF, p-value: 0.3643
##
## Call:
## lm(formula = agr_modern_wave01 ~ cnbp_modern_wave01)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4639 -4639 -4639 -4639 1382335
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.639e+03 4.639e+03 1.000 0.318
## cnbp_modern_wave01 5.475e-29 9.483e-28 0.058 0.954
##
## Residual standard error: 80210 on 298 degrees of freedom
## Multiple R-squared: 1.119e-05, Adjusted R-squared: -0.003344
## F-statistic: 0.003333 on 1 and 298 DF, p-value: 0.954
Interannual variability and cumulative NBP: the correlations are close to zero, especially in the later wave.
## [1] 0.03223847
## [1] 0.003344484
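The check above can be sketched as follows, with toy data standing in for the ensemble (object names and the sd-based variability measure are assumptions of this sketch):

```r
# Correlation between interannual variability (sd of annual NBP per member)
# and cumulative NBP across members, on toy data.
set.seed(1)
nbp_annual <- matrix(rnorm(300 * 30), nrow = 300)  # toy: members x years
iav  <- apply(nbp_annual, 1, sd)  # interannual variability per member
cnbp <- rowSums(nbp_annual)       # cumulative NBP per member
cor(iav, cnbp)                    # near zero for unrelated quantities
```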
Using the atmospheric growth rate as an example: how close can we get the model to observations? Can we do better than the standard run? What are the trade-offs of doing so? How does getting close on AGR affect performance in other outputs?
We’ve established that most of the original ensemble members have an ME/MAE/RMSE larger than the standard run's. More (but still few) of the wave01 members perform better than the standard run.
A map of the 2D projections of parameter space where the ensemble member performs better than standard.
The blue points are from the first wave; they were not subject to the constraint, and so may be removed in the second wave (wave01).
We had trouble fitting RMSE, so we are trying mean error instead.
Why is there an odd collection at just under 1?
## [1] 100
This next pairs plot looks at all the ensemble members that have a better mean atmospheric growth error than standard.
This next plot looks at all the ensemble members that have a better mean atmospheric growth error than standard AND pass the level 2 constraints.
The number is small (41/300), but the ensemble members seem spread across parameter space.